Your browser doesn't support javascript.
loading
Mostrar: 20 | 50 | 100
Resultados 1 - 20 de 297
Filtrar
3.
PLoS One ; 19(4): e0301818, 2024.
Artigo em Inglês | MEDLINE | ID: mdl-38593132

RESUMO

The widespread dissemination of misinformation on social media is a serious threat to global health. To a large extent, it is still unclear who actually shares health-related misinformation deliberately and accidentally. We conducted a large-scale online survey among 5,307 Facebook users in six sub-Saharan African countries, in which we collected information on sharing of fake news and truth discernment. We estimate the magnitude and determinants of deliberate and accidental sharing of misinformation related to three vaccines (HPV, polio, and COVID-19). In an OLS framework we relate the actual sharing of fake news to several socioeconomic characteristics (age, gender, employment status, education), social media consumption, personality factors and vaccine-related characteristics while controlling for country and vaccine-specific effects. We first show that actual sharing rates of fake news articles are substantially higher than those reported from developed countries and that most of the sharing occurs accidentally. Second, we reveal that the determinants of deliberate vs. accidental sharing differ. While deliberate sharing is related to being older and risk-loving, accidental sharing is associated with being older, male, and high levels of trust in institutions. Lastly, we demonstrate that the determinants of sharing differ by the adopted measure (intentions vs. actual sharing) which underscores the limitations of commonly used intention-based measures to derive insights about actual fake news sharing behaviour.


Assuntos
Infertilidade , Mídias Sociais , Vacinas , Humanos , Masculino , Desinformação , África Subsaariana/epidemiologia
4.
PLoS One ; 19(4): e0301364, 2024.
Artigo em Inglês | MEDLINE | ID: mdl-38630681

RESUMO

Although a rich academic literature examines the use of fake news by foreign actors for political manipulation, there is limited research on potential foreign intervention in capital markets. To address this gap, we construct a comprehensive database of (negative) fake news regarding U.S. firms by scraping prominent fact-checking sites. We identify the accounts that spread the news on Twitter (now X) and use machine-learning techniques to infer the geographic locations of these fake news spreaders. Our analysis reveals that corporate fake news is more likely than corporate non-fake news to be spread by foreign accounts. At the country level, corporate fake news is more likely to originate from African and Middle Eastern countries and tends to increase during periods of high geopolitical tension. At the firm level, firms operating in uncertain information environments and strategic industries are more likely to be targeted by foreign accounts. Overall, our findings provide initial evidence of foreign-originating misinformation in capital markets and thus have important policy implications.


Assuntos
Desinformação , Geografia , Bases de Dados Factuais , Indústrias
5.
PLoS One ; 19(3): e0299031, 2024.
Artigo em Inglês | MEDLINE | ID: mdl-38478479

RESUMO

Public comments are an important opinion for civic when the government establishes rules. However, recent AI can easily generate large quantities of disinformation, including fake public comments. We attempted to distinguish between human public comments and ChatGPT-generated public comments (including ChatGPT emulated that of humans) using Japanese stylometric analysis. Study 1 conducted multidimensional scaling (MDS) to compare 500 texts of five classes: Human public comments, GPT-3.5 and GPT-4 generated public comments only by presenting the titles of human public comments (i.e., zero-shot learning, GPTzero), GPT-3.5 and GPT-4 emulated by presenting sentences of human public comments and instructing to emulate that (i.e., one-shot learning, GPTone). The MDS results showed that the Japanese stylometric features of the public comments were completely different from those of the GPTzero-generated texts. Moreover, GPTone-generated public comments were closer to those of humans than those generated by GPTzero. In Study 2, the performance levels of the random forest (RF) classifier for distinguishing three classes (human, GPTzero, and GPTone texts). RF classifiers showed the best precision for the human public comments of approximately 90%, and the best precision for the fake public comments generated by GPT (GPTzero and GPTone) was 99.5% by focusing on integrated next writing style features: phrase patterns, parts-of-speech (POS) bigram and trigram, and function words. Therefore, the current study concluded that we could discriminate between GPT-generated fake public comments and those written by humans at the present time.


Assuntos
Desinformação , Aprendizagem , Humanos , Japão , Governo , Análise de Escalonamento Multidimensional
6.
PLoS One ; 19(3): e0300497, 2024.
Artigo em Inglês | MEDLINE | ID: mdl-38512834

RESUMO

Disinformation-false information intended to cause harm or for profit-is pervasive. While disinformation exists in several domains, one area with great potential for personal harm from disinformation is healthcare. The amount of disinformation about health issues on social media has grown dramatically over the past several years, particularly in response to the COVID-19 pandemic. The study described in this paper sought to determine the characteristics of multimedia social network posts that lead them to believe and potentially act on healthcare disinformation. The study was conducted in a neuroscience laboratory in early 2022. Twenty-six study participants each viewed a series of 20 either honest or dishonest social media posts, dealing with various aspects of healthcare. They were asked to determine if the posts were true or false and then to provide the reasoning behind their choices. Participant gaze was captured through eye tracking technology and investigated through "area of interest" analysis. This approach has the potential to discover the elements of disinformation that help convince the viewer a given post is true. Participants detected the true nature of the posts they were exposed to 69% of the time. Overall, the source of the post, whether its claims seemed reasonable, and the look and feel of the post were the most important reasons they cited for determining whether it was true or false. Based on the eye tracking data collected, the factors most associated with successfully detecting disinformation were the total number of fixations on key words and the total number of revisits to source information. The findings suggest the outlines of generalizations about why people believe online disinformation, suggesting a basis for the development of mid-range theory.


Assuntos
COVID-19 , Mídias Sociais , Humanos , Desinformação , Pandemias , Instalações de Saúde , Laboratórios , COVID-19/epidemiologia
7.
BMJ ; 384: q579, 2024 03 20.
Artigo em Inglês | MEDLINE | ID: mdl-38508671
8.
BMJ ; 384: e078538, 2024 03 20.
Artigo em Inglês | MEDLINE | ID: mdl-38508682

RESUMO

OBJECTIVES: To evaluate the effectiveness of safeguards to prevent large language models (LLMs) from being misused to generate health disinformation, and to evaluate the transparency of artificial intelligence (AI) developers regarding their risk mitigation processes against observed vulnerabilities. DESIGN: Repeated cross sectional analysis. SETTING: Publicly accessible LLMs. METHODS: In a repeated cross sectional analysis, four LLMs (via chatbots/assistant interfaces) were evaluated: OpenAI's GPT-4 (via ChatGPT and Microsoft's Copilot), Google's PaLM 2 and newly released Gemini Pro (via Bard), Anthropic's Claude 2 (via Poe), and Meta's Llama 2 (via HuggingChat). In September 2023, these LLMs were prompted to generate health disinformation on two topics: sunscreen as a cause of skin cancer and the alkaline diet as a cancer cure. Jailbreaking techniques (ie, attempts to bypass safeguards) were evaluated if required. For LLMs with observed safeguarding vulnerabilities, the processes for reporting outputs of concern were audited. 12 weeks after initial investigations, the disinformation generation capabilities of the LLMs were re-evaluated to assess any subsequent improvements in safeguards. MAIN OUTCOME MEASURES: The main outcome measures were whether safeguards prevented the generation of health disinformation, and the transparency of risk mitigation processes against health disinformation. RESULTS: Claude 2 (via Poe) declined 130 prompts submitted across the two study timepoints requesting the generation of content claiming that sunscreen causes skin cancer or that the alkaline diet is a cure for cancer, even with jailbreaking attempts. GPT-4 (via Copilot) initially refused to generate health disinformation, even with jailbreaking attempts-although this was not the case at 12 weeks. In contrast, GPT-4 (via ChatGPT), PaLM 2/Gemini Pro (via Bard), and Llama 2 (via HuggingChat) consistently generated health disinformation blogs. In September 2023 evaluations, these LLMs facilitated the generation of 113 unique cancer disinformation blogs, totalling more than 40 000 words, without requiring jailbreaking attempts. The refusal rate across the evaluation timepoints for these LLMs was only 5% (7 of 150), and as prompted the LLM generated blogs incorporated attention grabbing titles, authentic looking (fake or fictional) references, fabricated testimonials from patients and clinicians, and they targeted diverse demographic groups. Although each LLM evaluated had mechanisms to report observed outputs of concern, the developers did not respond when observations of vulnerabilities were reported. CONCLUSIONS: This study found that although effective safeguards are feasible to prevent LLMs from being misused to generate health disinformation, they were inconsistently implemented. Furthermore, effective processes for reporting safeguard problems were lacking. Enhanced regulation, transparency, and routine auditing are required to help prevent LLMs from contributing to the mass generation of health disinformation.


Assuntos
Camelídeos Americanos , Neoplasias Cutâneas , Humanos , Animais , Desinformação , Inteligência Artificial , Estudos Transversais , Protetores Solares , Idioma
9.
Artigo em Alemão | MEDLINE | ID: mdl-38332143

RESUMO

Misinformation and disinformation in social media have become a challenge for effective public health measures. Here, we examine factors that influence believing and sharing false information, both misinformation and disinformation, at individual, social, and contextual levels and discuss intervention possibilities.At the individual level, knowledge deficits, lack of skills, and emotional motivation have been associated with believing in false information. Lower health literacy, a conspiracy mindset and certain beliefs increase susceptibility to false information. At the social level, the credibility of information sources and social norms influence the sharing of false information. At the contextual level, emotions and the repetition of messages affect belief in and sharing of false information.Interventions at the individual level involve measures to improve knowledge and skills. At the social level, addressing social processes and social norms can reduce the sharing of false information. At the contextual level, regulatory approaches involving social networks is considered an important point of intervention.Social inequalities play an important role in the exposure to and processing of misinformation. It remains unclear to which degree the susceptibility to belief in and share misinformation is an individual characteristic and/or context dependent. Complex interventions are required that should take into account multiple influencing factors.


Assuntos
Comunicação em Saúde , Mídias Sociais , Humanos , Desinformação , 60713 , Alemanha , Comunicação
15.
Neural Netw ; 172: 106115, 2024 Apr.
Artigo em Inglês | MEDLINE | ID: mdl-38219679

RESUMO

With the proliferation of social media, the detection of fake news has become a critical issue that poses a significant threat to society. The dissemination of fake information can lead to social harm and damage the credibility of information. To address this issue, deep learning has emerged as a promising approach, especially with the development of Natural Language Processing (NLP). This study introduces a novel approach called Graph Global Attention Network with Memory (GANM) for detecting fake news. This approach leverages NLP techniques to encode nodes with news context and user content. It employs three graph convolutional networks to extract informative features from the news propagation network and aggregates endogenous and exogenous user information. This methodology aims to address the challenge of identifying fake news within the context of social media. Innovatively, the GANM combines two strategies. First, a novel global attention mechanism with memory is employed in the GANM to learn the structural homogeneity of news propagation networks, which is the attention mechanism of a single graph with a history of all graphs. Second, we design a module for partial key information learning aggregation to emphasize the acquisition of partial key information in the graph and merge node-level embeddings with graph-level embeddings into fine-grained joint information. Our proposed method provides a new direction in news detection research with a combination of global and partial information and achieves promising performance on real-world datasets.


Assuntos
Aprendizado Profundo , Mídias Sociais , Humanos , Desinformação , Redes Reguladoras de Genes , Processamento de Linguagem Natural
16.
Health Commun ; 39(3): 616-628, 2024 Mar.
Artigo em Inglês | MEDLINE | ID: mdl-36794382

RESUMO

Health-related misinformation is a major threat to public health and particularly worrisome for populations experiencing health disparities. This study sets out to examine the prevalence, socio-psychological predictors, and consequences of beliefs in COVID-19 vaccine misinformation among unvaccinated Black Americans. We conducted an online national survey with Black Americans who had not been vaccinated against COVID-19 (N = 800) between February and March 2021. Results showed that beliefs in COVID-19 vaccine misinformation were prevalent among unvaccinated Black Americans with 13-19% of participants agreeing or strongly agreeing with various false claims about COVID-19 vaccines and 35-55% unsure about the veracity of these claims. Conservative ideology, conspiracy thinking mind-set, religiosity, and racial consciousness in health care settings predicted greater beliefs in COVID-19 vaccine misinformation, which were associated with lower vaccine confidence and acceptance. Theoretical and practical implications of the findings are discussed.


Assuntos
Vacinas contra COVID-19 , COVID-19 , Conhecimentos, Atitudes e Prática em Saúde , Humanos , Negro ou Afro-Americano , COVID-19/epidemiologia , COVID-19/prevenção & controle , Vacinas contra COVID-19/uso terapêutico , Prevalência , Vacinação , Desinformação
17.
J Exp Psychol Appl ; 30(1): 16-32, 2024 Mar.
Artigo em Inglês | MEDLINE | ID: mdl-37227873

RESUMO

Sharing information in real time leaves little room for double-checking. This leads to an abundance of low-quality information that might later need to be corrected and provides a foundation on which false beliefs can arise. Today, the general population often consults digital media platforms for news content. Because of the sheer amount of news articles and the various ways digital media platforms organize material, readers may encounter news articles with faulty content and their subsequent corrections in various orders. They might read the misinformation before the corrected version or vice versa. We conducted two studies in which participants were presented with two reports of a news event: one report that included a piece of misinformation and one report in which that misinformation was retracted. The order in which the two reports were encountered was manipulated. In Study 1, the retraction contained an explicit reminder of the misinformation; in Study 2, it did not. Neither Study 1 nor Study 2 found an effect of presentation order on misinformation reliance. These findings run counter to predictions by those accounts of the continued influence effect that suggest a better encoding of retractions and subsequent lesser reliance on misinformation when retractions are encountered first. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Assuntos
Desinformação , Internet , Humanos , Comunicação , Leitura
18.
J Exp Psychol Appl ; 30(1): 33-47, 2024 Mar.
Artigo em Inglês | MEDLINE | ID: mdl-37902694

RESUMO

People are prone to forming false memories for fictitious events described in fake news stories. In this preregistered study, we hypothesized that the formation of false memories may be promoted when the fake news includes stereotypes that reflect positively on one's own nationality or negatively on another nationality. We exposed German and Irish participants (N = 1,184) to fabricated news stories that were consistent with positive or negative stereotypes about Germany and Ireland. The predicted three-way interaction was not observed. Exploratory follow-up analyses revealed the expected pattern of results for German participants but not for Irish participants, who were more likely to remember positive stories and stories about Ireland. Individual differences in patriotism did not significantly affect false memory rates; however, higher levels of cognitive ability and analytical reasoning decreased false memories and increased participants' ability to distinguish between true and false news stories. These results demonstrate that stereotypical information pertaining to national identity can influence the formation of false memories for fake news, but variations in cultural context may affect how misinformation is received and processed. We conclude by urging researchers to consider the sociopolitical and media landscape when predicting the consequences of fake news exposure. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Assuntos
Desinformação , Memória , Humanos , Rememoração Mental , Cognição , Alemanha
19.
JAMA Intern Med ; 184(1): 92-96, 2024 Jan 01.
Artigo em Inglês | MEDLINE | ID: mdl-37955873

RESUMO

Importance: Although artificial intelligence (AI) offers many promises across modern medicine, it may carry a significant risk for the mass generation of targeted health disinformation. This poses an urgent threat toward public health initiatives and calls for rapid attention by health care professionals, AI developers, and regulators to ensure public safety. Observations: As an example, using a single publicly available large-language model, within 65 minutes, 102 distinct blog articles were generated that contained more than 17 000 words of disinformation related to vaccines and vaping. Each post was coercive and targeted at diverse societal groups, including young adults, young parents, older persons, pregnant people, and those with chronic health conditions. The blogs included fake patient and clinician testimonials and obeyed prompting for the inclusion of scientific-looking referencing. Additional generative AI tools created an accompanying 20 realistic images in less than 2 minutes. This process was undertaken by health care professionals and researchers with no specialized knowledge in bypassing AI guardrails, relying solely on publicly available information. Conclusions and Relevance: These observations demonstrate that when the guardrails of AI tools are insufficient, the ability to rapidly generate diverse and large amounts of convincing disinformation is profound. Beyond providing 2 example scenarios, these findings demonstrate an urgent need for robust AI vigilance. The AI tools are rapidly progressing; alongside these advancements, emergent risks are becoming increasingly apparent. Key pillars of pharmacovigilance-including transparency, surveillance, and regulation-may serve as valuable examples for managing these risks and safeguarding public health.


Assuntos
Inteligência Artificial , Desinformação , Feminino , Gravidez , Adulto Jovem , Humanos , Idoso , Idoso de 80 Anos ou mais , Vigília , Pessoal de Saúde , Conhecimento
20.
JAMA Intern Med ; 184(1): 96-97, 2024 Jan 01.
Artigo em Inglês | MEDLINE | ID: mdl-37955920
SELEÇÃO DE REFERÊNCIAS
DETALHE DA PESQUISA
...